# English Q&A

## Llama 3.2 1B Instruct FitnessAssistant

A fine-tuned version of Llama-3.2-1B-Instruct using LoRA (Low-Rank Adaptation) weights, designed primarily to answer questions across a range of topics and provide relevant information.

- Tags: Large Language Model, Transformers
- Author: Soorya03 · Downloads: 72 · Likes: 2

## Doge 160M Instruct

Doge 160M is a small language model built on a dynamic masked attention mechanism, trained with supervised fine-tuning (SFT) and direct preference optimization (DPO).

- License: Apache-2.0
- Tags: Large Language Model, Transformers, English
- Author: SmallDoge · Downloads: 2,223 · Likes: 12

## Pythia 2.8b Deduped Synthetic Instruct

An instruction model fine-tuned from the deduplicated version of Pythia-2.8B, optimized on synthetic instruction datasets.

- License: Apache-2.0
- Tags: Large Language Model, Transformers, English
- Author: lambdalabs · Downloads: 46 · Likes: 6

## Bert Base Uncased Finetuned Triviaqa

A Q&A model based on bert-base-uncased, fine-tuned on the TriviaQA dataset.

- License: Apache-2.0
- Tags: Question Answering System, Transformers
- Author: FabianWillner · Downloads: 191 · Likes: 0

## Roberta Base Squad2

A RoBERTa-based extractive question answering model trained on the SQuAD 2.0 dataset, suitable for English Q&A tasks.

- Tags: Question Answering System, Transformers, English
- Author: ydshieh · Downloads: 31 · Likes: 0

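Several of the models listed here are extractive QA models of this kind: they score each token as a possible answer start and end, and return the best valid span. A minimal sketch of that decoding step (the scores below are made up for illustration):

```python
# Decode an answer span from per-token start/end scores, as extractive
# QA models trained on SQuAD do; real models produce these scores with
# a transformer encoder plus a linear head.
def best_span(start_scores, end_scores, max_len=15):
    """Return (start, end) maximizing start_scores[s] + end_scores[e]
    over valid spans with s <= e < s + max_len."""
    best, best_score = None, float("-inf")
    for s, ss in enumerate(start_scores):
        for e in range(s, min(s + max_len, len(end_scores))):
            score = ss + end_scores[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

start = [0.1, 2.0, 0.3, 0.0]
end   = [0.0, 0.2, 1.5, 0.1]
# the highest-scoring valid span starts at token 1 and ends at token 2
print(best_span(start, end))  # (1, 2)
```

SQuAD 2.0 models additionally compare the best span's score against a "no answer" score, since that dataset contains unanswerable questions.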
## Distilroberta Base Squad2

A lightweight, efficient Q&A model based on DistilRoBERTa-base, fine-tuned on the SQuAD v2 dataset.

- Tags: Question Answering System
- Author: twmkn9 · Downloads: 22 · Likes: 0

## Bert Large Uncased Whole Word Masking Squad Int8 0001

A BERT-large English Q&A model pre-trained with whole-word masking, fine-tuned on SQuAD v1.1, and quantized to INT8 precision.

- Tags: Question Answering System, Transformers
- Author: dkurt · Downloads: 23 · Likes: 0

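The INT8 quantization mentioned above maps floating-point weights to 8-bit integers plus a scale factor. A simple symmetric per-tensor scheme illustrates the idea (the actual scheme used for this model may differ, e.g. per-channel scales or asymmetric quantization):

```python
import numpy as np

# Symmetric INT8 quantization sketch: store int8 values and one
# float scale, reconstructing weights as q * scale at inference time.
def quantize_int8(w):
    scale = np.abs(w).max() / 127.0          # one scale per tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# the round-trip error is bounded by half a quantization step
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-8
```

This cuts weight storage to a quarter of float32 and lets inference use integer arithmetic, at the cost of a small, bounded rounding error per weight.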
## Bert Medium Finetuned Squadv2

A Q&A model based on the BERT-Medium architecture, fine-tuned on the SQuAD 2.0 dataset and designed for resource-constrained environments.

- Tags: Question Answering System, English
- Author: mrm8488 · Downloads: 1,399 · Likes: 1

## Roberta Large Squad2

A question-answering model based on the roberta-large architecture, trained specifically on the SQuAD 2.0 dataset.

- License: MIT
- Tags: Question Answering System, English
- Author: navteca · Downloads: 21 · Likes: 0

## Bert Tiny Finetuned Squadv2

A compact Q&A model based on Google's BERT-Tiny architecture, fine-tuned on the SQuAD 2.0 dataset, with a size of only 16.74 MB.

- Tags: Question Answering System, English
- Author: mrm8488 · Downloads: 6,327 · Likes: 1

## Bert Mini Finetuned Squadv2

BERT-Mini is a small BERT model from Google Research, fine-tuned on the SQuAD 2.0 dataset for question answering and suitable for environments with limited computational resources.

- Tags: Question Answering System, English
- Author: mrm8488 · Downloads: 50 · Likes: 0

## T5 Base Squad Qg Ae

A T5-base model jointly fine-tuned for question generation and answer extraction, supporting English text.

- Tags: Question Answering System, Transformers, English
- Author: lmqg · Downloads: 56 · Likes: 0

## Bert Base Uncased Squadv1 X1.84 F88.7 D36 Hybrid Filled V1

A Q&A model optimized with the nn_pruning library, retaining 50% of the original weights and fine-tuned on SQuAD v1, reaching an F1 score of 88.72.

- License: MIT
- Tags: Question Answering System, Transformers, English
- Author: madlag · Downloads: 30 · Likes: 0

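The pruned models above keep only a fraction of the original weights. The nn_pruning recipe is more elaborate (structured/hybrid block pruning with fine-tuning), but the core idea of magnitude pruning can be sketched in a few lines:

```python
import numpy as np

# Magnitude pruning sketch: zero out the smallest-magnitude weights,
# keeping only the top keep_fraction of entries.
def prune_by_magnitude(w, keep_fraction=0.5):
    k = int(w.size * keep_fraction)
    threshold = np.sort(np.abs(w).ravel())[::-1][k - 1]
    mask = np.abs(w) >= threshold            # True where a weight survives
    return w * mask, mask

w = np.array([0.9, -0.1, 0.4, -0.8, 0.05, 0.6], dtype=np.float32)
pruned, mask = prune_by_magnitude(w, keep_fraction=0.5)
# half of the weights survive; the small ones are zeroed
print(int(mask.sum()))  # 3
```

After pruning, the model is typically fine-tuned again so the remaining weights compensate, which is how these entries retain most of their F1 while running faster.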
## Bert Base Uncased Squadv1 X2.01 F89.2 D30 Hybrid Rewind Opt V1

A Q&A model based on BERT-base uncased, fine-tuned on SQuAD v1 and optimized with the nn_pruning library, achieving 2.01x faster inference and a 0.69-point F1 improvement.

- License: MIT
- Tags: Question Answering System, Transformers, English
- Author: madlag · Downloads: 22 · Likes: 0

## Roberta Base Squad2

A Q&A model based on the RoBERTa architecture, trained specifically on the SQuAD 2.0 dataset, suitable for English Q&A tasks.

- License: MIT
- Tags: Question Answering System, English
- Author: navteca · Downloads: 101 · Likes: 0

© 2025 AIbase